Depth estimation is a key technology in fields such as autonomous driving and robot navigation. However, traditional methods that rely on a single sensor are inevitably limited by that sensor's performance. We therefore propose an accurate and robust method that fuses LiDAR and a stereo camera. The method fully combines the strengths of the two sensors, preserving the high precision of LiDAR and the high resolution of images. Compared with conventional stereo matching, the proposed algorithm is less affected by object texture and illumination conditions. First, the depth measurements from the LiDAR data are converted into disparities in the stereo camera's frame. Because LiDAR points are relatively sparse along the y-axis, the converted disparity map is upsampled by interpolation. Second, to make full use of the accurate disparity map, it is fused with stereo matching to propagate the accurate disparities. Finally, the disparity map is converted back into a depth map. Moreover, the converted disparity map also speeds up the algorithm. We evaluate the proposed pipeline on the KITTI benchmark. Experiments show that our algorithm achieves higher accuracy than several classical methods.
translated by 谷歌翻译
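The depth-to-disparity conversion in the first step follows the standard stereo relation d = f · B / Z. A minimal NumPy sketch of that step (the function name and the zero-as-missing convention are ours, not the paper's; the interpolation-based upsampling is omitted):

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert metric depth Z to stereo disparity via d = f * B / Z.

    depth_m:    depth map in meters (0 marks pixels with no LiDAR return).
    focal_px:   camera focal length in pixels.
    baseline_m: stereo baseline in meters.
    """
    depth = np.asarray(depth_m, dtype=np.float64)
    disparity = np.zeros_like(depth)
    valid = depth > 0                      # leave missing pixels at 0
    disparity[valid] = focal_px * baseline_m / depth[valid]
    return disparity

# Example: with a KITTI-like rig (f ~ 721 px, B ~ 0.54 m), a point
# 10 m away maps to a disparity of 721 * 0.54 / 10 ~ 38.9 px.
```

Since the fused disparity map is later converted back to depth, the same relation is simply inverted (Z = f · B / d) for valid disparities.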
Recent advances in language-model pre-training leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out of these datasets, chiefly because many spoken languages are not well represented on the web and are therefore excluded from the large-scale crawls used to build the datasets. Furthermore, downstream users of these models are restricted to the languages originally chosen for pre-training. This work investigates how to best leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pre-training? 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a new African news corpus covering 16 languages, 8 of which are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both to additional languages and to additional domains is to fine-tune large pre-trained models on small quantities of high-quality translation data.
We present Native Chinese Reader (NCR), a new machine reading comprehension (MRC) dataset with particularly long articles in both modern and classical Chinese. NCR is collected from exam questions in the Chinese high-school curriculum, designed to evaluate the language proficiency of native Chinese speakers. Existing Chinese MRC datasets are either domain-specific or focus on short passages of a few hundred characters in modern Chinese. In contrast, NCR contains 8,390 documents with an average length of 1,024 characters, covering a wide range of Chinese writing styles, including modern articles, classical literature, and classical poetry. A total of 20,477 questions on these documents also require strong reasoning ability and common sense to figure out the correct answers. We implemented multiple baseline models using popular Chinese pre-trained models and launched an online competition using our dataset to examine the limits of current methods. The best model achieves 59% test accuracy, while human evaluation shows an average accuracy of 79%, indicating a significant performance gap between current MRC models and native speakers. We release the dataset at https://sites.google.com/view/native-chinese-reader/.
Grounded Situation Recognition (GSR), i.e., recognizing the salient activity (or verb) category in an image (e.g., buying) and detecting all corresponding semantic roles (e.g., agent and goods), is an essential step toward "human-like" event understanding. Since each verb is associated with a specific set of semantic roles, all existing GSR methods adopt a two-stage framework: predicting the verb in the first stage and detecting the semantic roles in the second. However, both stages have obvious drawbacks: 1) the widely used cross-entropy (XE) loss for object recognition is insufficient for verb classification, due to the large intra-class variation and high inter-class similarity of daily activities; 2) all semantic roles are detected in an autoregressive manner, which cannot model the complex semantic relations among different roles. To this end, we propose a novel SituFormer for GSR, which consists of a Coarse-to-Fine Verb Model (CFVM) and a Transformer-based Noun Model (TNM). CFVM is a two-step verb prediction model: a coarse-grained model trained with XE loss first proposes a set of verb candidates, and a fine-grained model trained with triplet loss then re-ranks these candidates using enhanced verb features (which are not only separable but also discriminative). TNM is a Transformer-based semantic-role detection model that detects all roles in parallel. Owing to the global relation-modeling ability and flexibility of the Transformer decoder, TNM can fully explore the statistical dependencies among roles. Extensive validation on the challenging SWiG benchmark shows that SituFormer achieves a new state-of-the-art performance under various metrics. The code is available at https://github.com/kellyiss/situformer.
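The coarse-to-fine idea in CFVM can be illustrated with a minimal sketch: a coarse classifier proposes top-k verb candidates, and a (separately trained) fine-grained scorer re-ranks only those candidates. This is a schematic outline of the two-step control flow, not the paper's actual models; the function names and the callable-scorer interface are our own illustration.

```python
import numpy as np

def rerank_verbs(coarse_scores, fine_scorer, k=10):
    """Two-step verb prediction: the coarse model proposes the top-k
    verb candidates, then the fine-grained model re-scores only those
    candidates and the best re-scored candidate wins.

    coarse_scores: 1-D array of per-verb scores from the coarse model.
    fine_scorer:   callable mapping a verb index to a fine-grained score
                   (stands in for the triplet-loss-trained model).
    """
    candidates = np.argsort(coarse_scores)[::-1][:k]   # top-k proposals
    fine_scores = {v: fine_scorer(v) for v in candidates}
    return max(fine_scores, key=fine_scores.get)       # re-ranked winner
```

The point of the design is that the expensive, discriminative fine model only ever sees k candidates instead of the full verb vocabulary.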
While deep learning methods have achieved advanced video object recognition performance in recent years, perceiving heavily occluded objects in videos remains a very challenging task. To promote the development of occlusion understanding, we collect a large-scale dataset called OVIS for video instance segmentation in occluded scenarios. OVIS consists of 296K high-quality instance masks and 901 occluded scenes. While our human visual system can perceive those occluded objects through contextual reasoning and association, our experiments suggest that current video understanding systems cannot. On the OVIS dataset, all baseline methods encounter a performance degradation of about 80% on the heavily occluded object group, which demonstrates that there is still a long way to go in understanding obscured objects and videos in complex real-world scenarios. To facilitate research on new paradigms for video understanding systems, we launched a challenge based on the OVIS dataset. The top-performing submitted algorithms have already achieved much higher performance than our baselines. In this paper, we introduce the OVIS dataset and further dissect it by analyzing the results of the baselines and the submitted methods. The OVIS dataset and challenge information can be found at http://songbai.site/ovis.
One of the key challenges in deploying RL to real-world applications is adapting to variations of unknown environment contexts, such as changing terrains in robotic tasks and fluctuating bandwidth in congestion control. Existing works on adaptation to unknown environment contexts either assume the contexts are the same for the whole episode or assume the context variables are Markovian. However, in many real-world applications, the environment context usually stays stable for a stochastic period and then changes in an abrupt and unpredictable manner within an episode, resulting in a segment structure, which existing works fail to address. To leverage the segment structure of piecewise-stable context in real-world applications, in this paper, we propose a \textit{\textbf{Se}gmented \textbf{C}ontext \textbf{B}elief \textbf{A}ugmented \textbf{D}eep~(SeCBAD)} RL method. Our method can jointly infer the belief distribution over the latent context with the posterior over segment length, and perform more accurate belief context inference with observed data within the current context segment. The inferred belief context can be leveraged to augment the state, leading to a policy that can adapt to abrupt variations in context. We demonstrate empirically that SeCBAD can infer context segment length accurately and outperform existing methods on a toy grid-world environment and MuJoCo tasks with piecewise-stable context.
Conversational recommender systems (CRSs) often utilize external knowledge graphs (KGs) to introduce rich semantic information and recommend relevant items through natural language dialogues. However, the original KGs employed in existing CRSs are often incomplete and sparse, which limits the reasoning capability in recommendation. Moreover, only a few existing studies exploit the dialogue context to dynamically refine knowledge from KGs for better recommendation. To address the above issues, we propose the Variational Reasoning over Incomplete KGs Conversational Recommender (VRICR). Our key idea is to incorporate the large dialogue corpus that naturally accompanies CRSs to enhance the incomplete KGs, and to perform dynamic knowledge reasoning conditioned on the dialogue context. Specifically, we denote the dialogue-specific subgraphs of KGs as latent variables with categorical priors for adaptive knowledge graph refactoring. We propose a variational Bayesian method to approximate the posterior distributions over dialogue-specific subgraphs, which not only leverages the dialogue corpus for restructuring missing entity relations but also dynamically selects knowledge based on the dialogue context. Finally, we infuse the dialogue-specific subgraphs to decode the recommendation and responses. We conduct experiments on two benchmark CRS datasets. Experimental results confirm the effectiveness of our proposed method.
Over the past few years, developing a broad, universal, and general-purpose computer vision system has become a hot topic. A powerful universal system would be capable of solving diverse vision tasks simultaneously without being restricted to a specific problem or a specific data domain, which is of great importance in practical real-world computer vision applications. This study pushes the direction forward by concentrating on the million-scale multi-domain universal object detection problem. The problem is not trivial due to its complicated nature in terms of cross-dataset category-label duplication, label conflicts, and hierarchical taxonomy handling. Moreover, how to utilize emerging large pre-trained vision models for million-scale cross-dataset object detection in a resource-efficient way remains an open challenge. This paper tries to address these challenges by introducing our practices in label handling, hierarchy-aware loss design, and resource-efficient model training with a pre-trained large model. Our method is ranked second in the object detection track of the Robust Vision Challenge 2022 (RVC 2022). We hope our detailed study will serve as an alternative practice paradigm for similar problems in the community. The code is available at https://github.com/linfeng93/Large-UniDet.
Facial expression recognition is a challenging problem in computer vision. The primary reasons are class imbalance due to data collection and uncertainty due to inherent noise, such as fuzzy facial expressions and inconsistent labels. However, current research has focused either on the problem of class imbalance or on the problem of uncertainty, ignoring how to address the two problems jointly. Therefore, in this paper, we propose a framework based on ResNet and attention to solve the above problems. We design a weight for each class. Through this penalty mechanism, our model pays more attention to the learning of small classes during training, and the resulting decrease in model accuracy can be recovered by a Convolutional Block Attention Module (CBAM). Meanwhile, our backbone network also learns an uncertainty feature for each sample. By mixing uncertainty features between samples, the model can better learn the features that are useful for classification, thus suppressing uncertainty. Experiments show that our method surpasses most basic methods in terms of accuracy on facial expression datasets (e.g., AffectNet, RAF-DB), and it also handles the class-imbalance problem well.
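A common way to realize the per-class penalty mechanism described above is inverse-frequency class weighting, where rare expression classes get larger loss weights. The following sketch is one standard formulation, not necessarily the exact weighting used in the paper; the function name is ours.

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Weight each class inversely to its training frequency so that
    rare classes (e.g., under-represented expressions) contribute more
    to the loss. Uses the common n_samples / (n_classes * count) form.
    """
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    counts[counts == 0] = 1.0              # avoid division by zero
    return counts.sum() / (num_classes * counts)
```

The resulting vector can then be passed, for example, as the `weight` argument of PyTorch's `torch.nn.CrossEntropyLoss` so that minority-class errors are penalized more heavily.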
Attention-based arbitrary style transfer studies have shown promising performance in synthesizing vivid local style details. They typically use the all-to-all attention mechanism: each position of the content features is fully matched to all positions of the style features. However, all-to-all attention tends to generate distorted style patterns and has quadratic complexity, which limits both the effectiveness and efficiency of arbitrary style transfer. In this paper, we rethink what kind of attention mechanism is more appropriate for arbitrary style transfer. Our answer is a novel all-to-key attention mechanism: each position of the content features is matched only to key positions of the style features. Specifically, it integrates two newly proposed attention forms: distributed and progressive attention. Distributed attention assigns attention to multiple key positions, while progressive attention attends from coarse to fine. All-to-key attention promotes the matching of diverse and reasonable style patterns and has linear complexity. The resultant module, dubbed StyA2K, has fine properties in rendering reasonable style textures and maintaining consistent local structure. Qualitative and quantitative experiments demonstrate that our method achieves superior results to state-of-the-art approaches.